
Psychological Theory


Algorithm selection by rational metareasoning as a model of human strategy selection

Falk Lieder, Dillon Plunkett, Jessica B. Hamrick, Stuart J. Russell, Nicholas Hay, Tom Griffiths

Neural Information Processing Systems

Selecting the right algorithm is an important problem in computer science, because the algorithm often has to exploit the structure of the input to be efficient. The human mind faces the same challenge. Therefore, solutions to the algorithm selection problem can inspire models of human strategy selection and vice versa. Here, we view the algorithm selection problem as a special case of metareasoning and derive a solution that outperforms existing methods in sorting algorithm selection. We apply our theory to model how people choose between cognitive strategies and test its prediction in a behavioral experiment. We find that people quickly learn to adaptively choose between cognitive strategies. People's choices in our experiment are consistent with our model but inconsistent with previous theories of human strategy selection. Rational metareasoning appears to be a promising framework for reverse-engineering how people choose among cognitive strategies and translating the results into better solutions to the algorithm selection problem.
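The core idea of the abstract above can be sketched in a few lines: a metareasoner picks the algorithm whose expected cost is lowest given cheap features of the input. This is an illustrative sketch, not the authors' implementation; the cost models and the presortedness feature are assumptions chosen for the sorting example.

```python
# Hedged sketch of algorithm selection by expected-cost comparison.
# The cost models below are illustrative assumptions: insertion sort
# is cheap on short or nearly sorted inputs, merge sort scales better
# on large unsorted ones.

def presortedness(xs):
    """Fraction of adjacent pairs already in order (a simple input feature)."""
    if len(xs) < 2:
        return 1.0
    ordered = sum(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
    return ordered / (len(xs) - 1)

def expected_cost(algorithm, n, p):
    """Toy runtime models as a function of length n and presortedness p."""
    if algorithm == "insertion":
        return n * n * (1.0 - p) + n      # ~n when sorted, ~n^2 otherwise
    return n * max(n, 2).bit_length()     # ~n log n regardless of order

def select_algorithm(xs):
    """Choose the algorithm with the lowest expected cost for this input."""
    n, p = len(xs), presortedness(xs)
    return min(("insertion", "merge"), key=lambda a: expected_cost(a, n, p))
```

On an already-sorted list the sketch picks insertion sort; on a long reversed list it picks merge sort. The paper's contribution is learning such cost predictions from experience rather than hand-coding them.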


AI, Decision Science, and Psychological Theory in Decisions about People: A Case Study in Jury Selection

Lachman, Roy

AI Magazine

AI theory and its technology are rarely consulted in attempted resolutions of social problems. Solutions often require that decision-analytic techniques be combined with expert systems. The emerging literature on combined systems is directed at domains where the prediction of human behavior is not required. A foundational shift in AI presuppositions to intelligent agents working in collaboration provides an opportunity to explore efforts to improve the performance of social institutions that depend on accurate prediction of human behavior. Professionals concerned with human outcomes make decisions that are intuitive or analytic or some combination of both. Justifications and methodology are presented for combining analytic and intuitive agents in an expert system that supports professional decision making. The system presented demonstrates the challenges and opportunities inherent in developing and using AI-collaborative technology to solve social problems.


Cognitive Model Priors for Predicting Human Decisions

Bourgin, David D., Peterson, Joshua C., Reichman, Daniel, Griffiths, Thomas L., Russell, Stuart J.

arXiv.org Machine Learning

Human decision-making underlies all economic behavior. For the past four decades, human decision-making under uncertainty has continued to be explained by theoretical models based on prospect theory, a framework that was awarded the Nobel Prize in Economic Sciences. However, theoretical models of this kind have developed slowly, and robust, high-precision predictive models of human decisions remain a challenge. While machine learning is a natural candidate for solving these problems, it is currently unclear to what extent it can improve predictions obtained by current theories. We argue that this is mainly due to data scarcity, since noisy human behavior requires massive sample sizes to be accurately captured by off-the-shelf machine learning methods. To solve this problem, what is needed are machine learning models with appropriate inductive biases for capturing human behavior, and larger datasets. We offer two contributions towards this end: first, we construct "cognitive model priors" by pretraining neural networks with synthetic data generated by cognitive models (i.e., theoretical models developed by cognitive psychologists). We find that fine-tuning these networks on small datasets of real human decisions results in unprecedented state-of-the-art improvements on two benchmark datasets. Second, we present the first large-scale dataset for human decision-making, containing over 240,000 human judgments across over 13,000 decision problems. This dataset reveals the circumstances where cognitive model priors are useful, and provides a new standard for benchmarking prediction of human decisions under uncertainty.
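The pretrain-then-fine-tune recipe described above can be illustrated with a deliberately tiny stand-in: generate abundant synthetic choices from a toy prospect-theory-style value function, fit a predictor on them, then continue training the same weights on a small "human" sample. The value function, features, and sample sizes here are assumptions for illustration, not the paper's architecture (which uses neural networks on large choice-problem datasets).

```python
import numpy as np

# Hedged sketch of a "cognitive model prior": pretrain on synthetic
# choices from a cognitive model, then fine-tune on scarce real data.
rng = np.random.default_rng(0)

def subjective_value(x):
    """Toy prospect-theory-style value function (diminishing sensitivity)."""
    return np.sign(x) * np.abs(x) ** 0.88

def synthetic_choices(n):
    """Pairs of payoffs (a, b); the cognitive model picks the higher v()."""
    gambles = rng.uniform(0, 10, size=(n, 2))
    labels = (subjective_value(gambles[:, 1])
              > subjective_value(gambles[:, 0])).astype(float)
    return gambles, labels

def train(X, y, w, epochs, lr=0.1):
    """Logistic regression by full-batch gradient descent (the 'network')."""
    Xb = np.column_stack([X, np.ones(len(X))])   # append bias column
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

# Pretrain on abundant synthetic data, then fine-tune on a small sample
# standing in for real human judgments.
X_syn, y_syn = synthetic_choices(5000)
w = train(X_syn, y_syn, np.zeros(3), epochs=200)
X_hum, y_hum = synthetic_choices(50)
w = train(X_hum, y_hum, w, epochs=20)

p_hum = 1.0 / (1.0 + np.exp(-(np.column_stack([X_hum, np.ones(50)]) @ w)))
accuracy = np.mean((p_hum > 0.5) == (y_hum > 0.5))
```

The design point is that the fine-tuning step starts from weights that already encode the cognitive model's regularities, so far fewer real observations are needed than when training from scratch.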


Psychological Forest: Predicting Human Behavior

Plonsky, Ori (Technion - Israel Institute of Technology) | Erev, Ido (Technion - Israel Institute of Technology) | Hazan, Tamir (Technion - Israel Institute of Technology) | Tennenholtz, Moshe (Technion - Israel Institute of Technology)

AAAI Conferences

We introduce a synergetic approach incorporating psychological theories and data science in service of predicting human behavior. Our method harnesses psychological theories to extract rigorous features to a data science algorithm. We demonstrate that this approach can be extremely powerful in a fundamental human choice setting. In particular, a random forest algorithm that makes use of psychological features that we derive, dubbed psychological forest, leads to prediction that significantly outperforms best practices in a choice prediction competition. Our results also suggest that this integrative approach is vital for data science tools to perform reasonably well on the data. Finally, we discuss how social scientists can learn from using this approach and conclude that integrating social and data science practices is a highly fruitful path for future research of human behavior.
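The recipe in the abstract, deriving theory-motivated features and feeding them alongside the raw problem description to a random forest, can be sketched on a synthetic choice task. The task, feature names, and simulated chooser below are illustrative assumptions, not the authors' exact competition setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hedged sketch of the "psychological forest" idea: augment raw
# problem descriptions with psychology-derived features before
# fitting a random forest.
rng = np.random.default_rng(1)

# Each problem: a risky gamble (win `pay` with prob. `q`, else 0)
# versus a sure amount `sure`; the simulated chooser maximizes EV.
n = 1500
pay = rng.uniform(0, 20, n)
q = rng.uniform(0, 1, n)
sure = rng.uniform(0, 10, n)
chose_gamble = (pay * q > sure).astype(int)

def psychological_features(pay, q, sure):
    """Theory-derived features (illustrative choices): expected-value
    difference and a Prelec-style weighted-probability value."""
    ev_diff = pay * q - sure
    w_q = np.exp(-(-np.log(np.clip(q, 1e-9, 1.0))) ** 0.65)
    return np.column_stack([ev_diff, w_q * pay - sure])

# Raw description plus psychological features, as in the paper's recipe.
X = np.column_stack([pay, q, sure, psychological_features(pay, q, sure)])
train, test = slice(0, 1000), slice(1000, n)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X[train], chose_gamble[train])
accuracy = forest.score(X[test], chose_gamble[test])
```

Because the engineered features align with the signal the chooser actually uses, the forest can find it with shallow splits; this is the sense in which the psychological theory does feature-engineering work that the raw data science pipeline would otherwise have to discover on its own.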

